画像・音響処理
Image/Sound Processing
P2-1-246
高速撮影カメラを用いたマウス視線計測システム
Real-Time Eye Tracking System for Mouse Using a High Frame-Rate Digital Camera

○松田圭司1, 清水直樹2, 杉田祐子3, 河野憲二3, 三浦健一郎3
○Keiji Matsuda1, Naoki Shimizu2, Yuko Sugita3, Kenji Kawano3, Kenichiro Miura3
産総研ヒューマンライフ1, 奈良県立医科大耳鼻科2, 京都大院医認知行動脳科学3
Human Technology Res. Inst., AIST, Ibaraki, Japan1, Dept. Otorhinolaryngol., Nara Med. Univ., Nara, Japan2, Dept. Integ. Brain Sci., Grad. Sch. Med., Kyoto Univ., Kyoto, Japan3

To measure the eye movements of mice at high resolution and a high sampling rate, we developed a non-invasive and inexpensive eye tracking system based on an IEEE-1394b digital camera. Infrared light illuminates the eye, and the reflected image of the iris and the dark image of the pupil are captured by the camera. The center of the pupil is calculated by fitting an ellipse and is tracked over time. The adoption of Windows 7 x64 as the operating system makes the eye tracking system user-friendly. The system was originally developed for humans and monkeys and was adapted for mice, taking the following characteristics of the mouse eye into consideration. 1) The eye is quite small (~3.4 mm) compared with that of humans or monkeys. 2) The edge of the mouse pupil is not smooth, in contrast to that of humans or monkeys, which is smooth enough to allow accurate ellipse fitting. 3) It is difficult to train a mouse to fixate on targets placed at several external positions, as is done for the active calibration used for humans and monkeys. We addressed these issues as follows. 1) To compensate for the small size of the mouse eye, we illuminated the eye with an infrared light system consisting of a halogen bulb and flexible optical-fiber light guides. The image of the eye reflected by a hot mirror was captured by the digital camera fitted with a size-conversion adapter and a rear expansion lens. The hot mirror reflected infrared light but transmitted visible light, so the mouse could see the visual stimulus behind the mirror. To maintain sufficient luminance of the iris image, we adjusted the frame rate of the digital camera. 2) A noise-reduction algorithm was adopted to fit the ellipse to the noisy pupil edge. 3) Eye position was passively calibrated by moving a large-field visual stimulus that elicits horizontal/vertical eye movements. Using this system, we succeeded in characterizing the short-latency ocular responses in the initial phase of the vertical optokinetic response in mice.
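The pupil-tracking step (ellipse fitting with noise reduction) can be illustrated with the minimal Python/OpenCV sketch below. This is not the authors' implementation: the dark-pupil threshold, the morphological cleanup, the two-pass outlier rejection, and all parameter values are illustrative assumptions (OpenCV 4 is assumed for the findContours signature).

```python
# Sketch: pupil-center estimation by ellipse fitting on a noisy pupil edge.
# Assumptions (not from the abstract): dark-pupil thresholding, a two-pass
# refit that discards edge points far from the first ellipse, OpenCV 4.
import cv2
import numpy as np

def _ellipse_residual(points, ellipse):
    """Approximate normalized distance of points from an ellipse boundary."""
    (cx, cy), (w, h), ang = ellipse
    a, b = max(w, 1e-3) / 2.0, max(h, 1e-3) / 2.0
    theta = np.deg2rad(ang)
    c, s = np.cos(theta), np.sin(theta)
    x, y = points[:, 0] - cx, points[:, 1] - cy
    xr = c * x + s * y          # rotate into the ellipse frame
    yr = -s * x + c * y
    return np.abs((xr / a) ** 2 + (yr / b) ** 2 - 1.0)

def pupil_center(gray, dark_threshold=40, resid_tol=0.15):
    """Return (cx, cy) of the pupil in an 8-bit infrared eye image, or None."""
    # 1) Segment the dark pupil region and remove small noise blobs.
    _, mask = cv2.threshold(gray, dark_threshold, 255, cv2.THRESH_BINARY_INV)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    # 2) Take the largest dark blob as the pupil candidate edge.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    if not contours:
        return None
    edge = max(contours, key=cv2.contourArea).reshape(-1, 2).astype(np.float32)
    if len(edge) < 10:
        return None
    # 3) Noise reduction: fit once, drop ragged edge points and corneal
    #    reflections far from the fitted boundary, then refit.
    ellipse = cv2.fitEllipse(edge)
    inliers = edge[_ellipse_residual(edge, ellipse) < resid_tol]
    if len(inliers) >= 10:
        ellipse = cv2.fitEllipse(inliers)
    (cx, cy), _, _ = ellipse
    return cx, cy

# Usage sketch: call pupil_center() on each grayscale camera frame and track
# the returned center over time; pixel displacements would then be converted
# to degrees by the passive large-field-stimulus calibration described above.
```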
P2-1-247
ミリ波アクティブ・セキュリティ・イメージング・システムにおける複素テクスチャの適応的区分のための複素自己組織化マップのダイナミクス
Dynamics of complex-valued self-organizing map for adaptive classification of complex texture in millimeter-wave active security imaging systems

○有馬悠也1, 小野島昇吾1, 廣瀬明1
○Yuya Arima1, Shogo Onojima1, Akira Hirose1
東京大学 工学部電子情報工学科1
Dept EE&IS, Univ of Tokyo, Tokyo1

We report the characteristics of our previously proposed millimeter-wave imaging system for moving targets, which consists of a one-dimensional array antenna, a parallel front-end, and a complex-valued self-organizing map (CSOM) that deals with complex texture. The system uses Ka-band millimeter waves. Because the wavelength is very short, the electromagnetic wave propagates almost straight, like a lightwave. Consequently, the system requires no synthetic-aperture processing, resulting in a low calculation cost. We also employ the so-called envelope-phase detection (EPD) scheme, in which we apply amplitude modulation to the millimeter-wave carrier and, at the receiver array, detect the scattered wave with envelope detection so that the receiver is sensitive to the phase of the amplitude modulation. The modulation frequency is stepped over ~1 GHz; this bandwidth was chosen so that the EPD resolution is suitable for human-size objects. The array consists of bulk linearly tapered slot antenna (bulk-LTSA) elements whose impedance is low enough that they can be followed directly by envelope detectors. The array front-end therefore requires no millimeter-wave amplifiers, which reduces the cost of the system. In the adaptive processing, we extract the complex-valued texture of the measured complex-valued image. The texture depends on the scatterers of the object, so we employ the CSOM dynamics to classify it in the complex domain. The CSOM imaging process has a set of parameters such as the number of neurons, the self-organization coefficients for winners and neighbors, and the local window size. We analyze and report the dynamics and the performance experimentally.
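The CSOM classification of complex texture can be sketched as follows in Python/NumPy. This is only an illustration under assumptions not stated in the abstract: the local-window features (window mean and neighbor correlations), the number of neurons, and the winner/neighbor learning coefficients are hypothetical placeholders, not the authors' front-end or parameter choices.

```python
# Sketch: 1-D complex-valued self-organizing map (CSOM) classifying local
# complex-texture features of a complex-valued image. Feature definition and
# all coefficients are illustrative assumptions.
import numpy as np

def local_complex_features(image, win=5):
    """For each win x win window of a complex image, return a small complex
    feature vector (window mean plus horizontal/vertical neighbor correlations)
    that carries the local complex texture."""
    h, w = image.shape
    feats = []
    for i in range(0, h - win, win):
        for j in range(0, w - win, win):
            patch = image[i:i + win, j:j + win]
            mean = patch.mean()
            corr_h = (patch[:, 1:] * np.conj(patch[:, :-1])).mean()
            corr_v = (patch[1:, :] * np.conj(patch[:-1, :])).mean()
            feats.append([mean, corr_h, corr_v])
    return np.asarray(feats, dtype=complex)

class CSOM:
    """1-D CSOM with separate self-organization rates for winner and neighbors."""
    def __init__(self, n_neurons, dim, lr_winner=0.5, lr_neighbor=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.standard_normal((n_neurons, dim)) \
                 + 1j * rng.standard_normal((n_neurons, dim))
        self.lr_winner, self.lr_neighbor = lr_winner, lr_neighbor

    def winner(self, x):
        # Distance is the Euclidean norm of the complex difference vector.
        return int(np.argmin(np.linalg.norm(self.w - x, axis=1)))

    def update(self, x):
        k = self.winner(x)
        self.w[k] += self.lr_winner * (x - self.w[k])
        for n in (k - 1, k + 1):            # immediate neighbors on the 1-D map
            if 0 <= n < len(self.w):
                self.w[n] += self.lr_neighbor * (x - self.w[n])
        return k

# Usage sketch: extract features from a measured complex image, self-organize
# for a few epochs, then label every window by its winning neuron.
#   som = CSOM(n_neurons=4, dim=3)
#   features = local_complex_features(image)   # image: 2-D complex ndarray
#   for _ in range(10):
#       for f in features:
#           som.update(f)
#   labels = [som.winner(f) for f in features]
```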
P2-1-248
Withdrawn
